Results 1 - 11 of 11
1.
PLoS One ; 19(5): e0298373, 2024.
Article in English | MEDLINE | ID: mdl-38691542

ABSTRACT

Pulse repetition interval modulation (PRIM) is integral to radar identification in modern electronic support measure (ESM) and electronic intelligence (ELINT) systems. Various distortions, including missing pulses, spurious pulses, unintended jitters, and noise from radar antenna scans, often hinder the accurate recognition of PRIM. This research introduces a novel three-stage approach for PRIM recognition, emphasizing the innovative use of PRI sound. A transfer-learning-aided deep convolutional neural network (DCNN) is initially used for feature extraction. This is followed by an extreme learning machine (ELM) for real-time PRIM classification. Finally, a gray wolf optimizer (GWO) refines the network's robustness. To evaluate the proposed method, we develop a real experimental dataset consisting of the sounds of six common PRI patterns. We utilize eight pre-trained DCNN architectures for evaluation, with VGG16 and ResNet50V2 notably achieving recognition accuracies of 97.53% and 96.92%, respectively. Integrating the ELM and GWO further improves these accuracies to 98.80% and 97.58%, respectively. This research advances radar identification by offering an enhanced method for PRIM recognition, emphasizing the potential of PRI sound to address real-world distortions in ESM and ELINT systems.
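The two learning stages above can be sketched compactly. Below is a minimal, illustrative ELM working on stand-in "DCNN features": the feature dimension, hidden-layer size, and synthetic data are demo assumptions, not the paper's configuration. What makes the ELM stage fast is that its hidden layer stays random and only the output weights are solved, in closed form:

```python
import numpy as np

def train_elm(X, T, n_hidden=64, seed=0):
    """Basic Extreme Learning Machine: random (fixed) hidden layer,
    closed-form least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
    b = rng.normal(size=n_hidden)                 # random biases, never trained
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy 2-class problem on synthetic stand-in "DCNN features".
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))
T = np.eye(2)[(X[:, 0] > 0).astype(int)]          # one-hot targets
W, b, beta = train_elm(X, T)
acc = np.mean(elm_predict(X, W, b, beta).argmax(axis=1) == T.argmax(axis=1))
```

The GWO stage described in the abstract would then refine the random input weights `W` and biases `b` rather than leave them fixed.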


Subjects
Deep Learning; Neural Networks, Computer; Sound; Radar; Algorithms; Pattern Recognition, Automated/methods
2.
Heliyon ; 10(7): e28147, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38689992

ABSTRACT

Deep Convolutional Neural Networks (DCNNs) have shown remarkable success in image classification tasks, but optimizing their hyperparameters can be challenging due to their complex structure. This paper develops the Adaptive Habitat Biogeography-Based Optimizer (AHBBO) for tuning the hyperparameters of DCNNs in image classification tasks. In complicated optimization problems, the canonical BBO suffers from premature convergence and insufficient exploration. To address these problems, an adaptive habitat is presented that permits variable habitat sizes and regulated mutation. This modification increases exploration and population diversity, resulting in better optimization performance and a greater chance of finding high-quality solutions across a wide range of problem domains. AHBBO is tested on 53 benchmark optimization functions and demonstrates its effectiveness in improving initial stochastic solutions and converging faster to the optimum. Furthermore, DCNN-AHBBO is compared to 23 well-known image classifiers on nine challenging image classification problems and shows superior performance, reducing the error rate by up to 5.14%. Our proposed algorithm outperforms 13 benchmark classifiers in 87 out of 95 evaluations, providing a high-performance and reliable solution for optimizing DCNNs in image classification tasks. This research contributes to the field of deep learning by proposing a new optimization algorithm that can improve the efficiency of deep neural networks in image classification.
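For context, the migration step that BBO builds on can be sketched as below. This is the canonical mechanism only: good habitats (low fitness, for minimization) emigrate features, poor habitats immigrate them. The adaptive-habitat extension that defines AHBBO is not reproduced here, and the population sizes and test function are demo assumptions:

```python
import numpy as np

def bbo_migration_step(pop, fitness, rng):
    """One canonical BBO migration step: emigration/immigration rates are
    derived from fitness rank; habitats probabilistically copy features
    from better habitats."""
    n, d = pop.shape
    order = np.argsort(fitness)                 # best (lowest fitness) first
    rank = np.empty(n)
    rank[order] = np.arange(n)
    mu = (n - rank) / n                         # emigration rate: high for good habitats
    lam = 1.0 - mu                              # immigration rate: high for poor habitats
    new = pop.copy()
    for i in range(n):
        for j in range(d):
            if rng.random() < lam[i]:           # habitat i accepts an immigrant feature
                donor = rng.choice(n, p=mu / mu.sum())
                new[i, j] = pop[donor, j]
    return new

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 4))
fitness = (pop ** 2).sum(axis=1)                # sphere function as a toy objective
new_pop = bbo_migration_step(pop, fitness, rng)
```

AHBBO's contribution, per the abstract, is making the habitat size adaptive and regulating mutation; the mechanics above are the unmodified baseline it builds on.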

3.
Heliyon ; 9(9): e19431, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37809869

ABSTRACT

Financial accounting information systems (FAISs) are one of the scientific fields where deep learning (DL) and swarm-based algorithms have recently seen increased use. Nevertheless, the application of these hybrid networks has become more challenging as a result of the heightened complexity imposed by extensive datasets. To tackle this issue, we present a new methodology that integrates the twin adjustable reinforced chimp optimization algorithm (TAR-CHOA) with deep long short-term memory (DLSTM) to forecast profits using FAISs. The main contribution of this research is the development of the TAR-CHOA algorithm, which improves the efficacy of profit prediction models. Moreover, due to the unavailability of an appropriate dataset for this particular problem, a new dataset has been constructed by employing fifteen inputs based on the prior Chinese stock market Kaggle dataset. In this study, we have designed and assessed five DLSTM-based optimization algorithms for forecasting financial accounting profit. The performance of the various models has been evaluated and ranked for financial accounting profit prediction. According to our research, the best-performing DL-based model is DLSTM-TAR-CHOA. One constraint of our methodology is its dependence on historical financial accounting data, operating under the assumption that past patterns and relationships will persist in the future. Furthermore, it is important to note that the efficacy of our models may differ based on the distinct attributes and fluctuations observed in various financial markets. These identified limitations present potential avenues for future research to investigate alternative methodologies and broaden the scope of our findings.
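For the forecasting setup, a profit series is typically turned into supervised windows before an LSTM sees it: each sample holds the last few observations, and the target is the next value. A minimal sketch with made-up numbers (the fifteen-input feature construction from the Kaggle-derived dataset is not reproduced):

```python
import numpy as np

def make_windows(series, lookback):
    """Turn a 1-D series into (X, y) supervised pairs for sequence models
    such as the DLSTM: each sample is `lookback` past values -> next value."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

profits = np.array([1.0, 1.2, 0.9, 1.5, 1.7, 1.4, 1.9])   # toy profit series
X, y = make_windows(profits, lookback=3)
# X[0] -> [1.0, 1.2, 0.9]; y[0] -> 1.5
```

A metaheuristic such as TAR-CHOA would then search over the DLSTM's hyperparameters (or weights) with forecast error on windows like these as its fitness.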

4.
Expert Syst Appl ; 213: 119206, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36348736

ABSTRACT

Applying Deep Learning (DL) to radiological images (i.e., chest X-rays) is emerging because of the necessity of having accurate and fast COVID-19 detectors. Deep Convolutional Neural Networks (DCNNs) have typically been used as robust COVID-19 positive-case detectors in these approaches. Such DCNNs tend to utilize Gradient Descent-Based (GDB) algorithms as the trainers of the last fully connected layers. Although GDB training algorithms have simple structures and fast convergence rates for cases with large training samples, they suffer from the manual tuning of numerous parameters, getting stuck in local minima, large training-set requirements, and inherently sequential procedures; it is exceedingly challenging to parallelize them with Graphics Processing Units (GPUs). Consequently, the Chimp Optimization Algorithm (ChOA) is presented for training the DCNN's fully connected layers, in light of the scarcity of a big COVID-19 training dataset and for the purpose of developing a fast COVID-19 detector with the capability of parallel implementation. In addition, two publicly accessible datasets termed COVID-Xray-5k and COVIDetectioNet are used to benchmark the proposed detector, known as DCNN-ChOA. In order to make a fair comparison, two structures are proposed: i-6c-2s-12c-2s and i-8c-2s-16c-2s, all of which have had their hyperparameters fine-tuned. The outcomes are evaluated in comparison to a standard DCNN, a hybrid DCNN plus Genetic Algorithm (DCNN-GA), and a Matched Subspace classifier with Adaptive Dictionaries (MSAD). Due to the large variation in results, we employ a weighted average of an ensemble of ten trained DCNN-ChOA models, with validation accuracy used to determine the ensemble weights. The validation accuracy for the mixed ensemble DCNN-ChOA is 99.11%. The LeNet-5 DCNN ensemble's detection accuracy on COVID-19 is 84.58%. Comparatively, the suggested DCNN-ChOA yields over 99.11% accurate detection with a false alarm rate of less than 0.89%. The outcomes show that the DCNN-ChOA can deliver noticeably superior results than the comparable detectors. The Class Activation Map (CAM) is another tool used in this study to identify probable COVID-19-infected areas. Results show that the highlighted regions are closely correlated with clinical results, as verified by experts.
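The core idea, training only the final fully connected layers with a population-based optimizer instead of gradient descent, can be illustrated as below. The chimp-specific update equations are not reproduced; a plain resample-around-the-best search stands in for ChOA, and the data, dimensions, and iteration counts are invented for the demo:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def fc_loss(w, X, y, n_classes):
    """Cross-entropy of a single fully connected layer whose flattened
    weights `w` are one candidate solution; this is the fitness a
    metaheuristic like ChOA would minimize."""
    W = w.reshape(X.shape[1], n_classes)
    p = softmax(X @ W)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))                  # stand-in extracted features
y = (X[:, 0] > 0).astype(int)                 # separable toy labels
pop = rng.normal(size=(30, 5 * 2))            # 30 candidate weight vectors
best = min(pop, key=lambda w: fc_loss(w, X, y, 2))
init_loss = fc_loss(best, X, y, 2)
for _ in range(50):                           # resample the population around the best
    pop = best + 0.5 * rng.normal(size=pop.shape)
    cand = min(pop, key=lambda w: fc_loss(w, X, y, 2))
    if fc_loss(cand, X, y, 2) < fc_loss(best, X, y, 2):
        best = cand
final_loss = fc_loss(best, X, y, 2)
```

Because each candidate's fitness is an independent forward pass, the population evaluates naturally in parallel, which is the parallelization advantage the abstract contrasts with sequential gradient descent.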

5.
Soft comput ; 27(6): 3307-3326, 2023.
Article in English | MEDLINE | ID: mdl-33994846

ABSTRACT

The COVID-19 pandemic has significantly affected the life and health of many communities around the globe. The early detection of infected patients is effective in fighting COVID-19. Using radiology (X-ray) images is, perhaps, the fastest way to diagnose patients. Therefore, deep Convolutional Neural Networks (CNNs) can be considered applicable tools for diagnosing COVID-19 positive cases. Due to the complicated architecture of a deep CNN, its real-time training and testing become a challenging problem. This paper proposes using the Extreme Learning Machine (ELM) instead of the last fully connected layer to address this deficiency. However, the stochastic tuning of the parameters in the ELM's supervised section makes the final model unreliable. Therefore, to cope with this problem and maintain network reliability, the sine-cosine algorithm is utilized to tune the ELM's parameters. The designed network is then benchmarked on the COVID-Xray-5k dataset, and the results are verified by a comparative study with a canonical deep CNN, an ELM optimized by cuckoo search, an ELM optimized by a genetic algorithm, and an ELM optimized by the whale optimization algorithm. The proposed approach outperforms the comparative benchmarks with a final accuracy of 98.83% on the COVID-Xray-5k dataset, leading to a relative error reduction of 2.33% compared to a canonical deep CNN. Even more critically, the designed network's training time is only 0.9421 ms, and the overall detection test time for 3100 images is 2.721 s.
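The sine-cosine algorithm used here to tune the ELM's parameters moves each candidate solution along sine or cosine trajectories toward the best solution found so far, with an amplitude that decays over iterations. A minimal sketch of the canonical update (the ELM fitness function is omitted, and the fixed `best`, population size, and bounds are demo assumptions):

```python
import numpy as np

def sca_step(pop, best, t, T, rng):
    """One sine-cosine algorithm update: agents oscillate toward/around the
    best solution; the amplitude r1 decays linearly from 2 to 0, shifting
    the search from exploration to exploitation."""
    r1 = 2.0 * (1 - t / T)
    r2 = rng.uniform(0, 2 * np.pi, pop.shape)
    r3 = rng.uniform(0, 2, pop.shape)
    r4 = rng.uniform(0, 1, pop.shape)
    step = np.where(r4 < 0.5,
                    r1 * np.sin(r2) * np.abs(r3 * best - pop),
                    r1 * np.cos(r2) * np.abs(r3 * best - pop))
    return pop + step

rng = np.random.default_rng(0)
pop = rng.uniform(-10, 10, size=(15, 3))      # 15 candidate parameter vectors
best = np.zeros(3)                            # assume a known best for the demo
for t in range(100):
    pop = sca_step(pop, best, t, 100, rng)
frozen = sca_step(pop, best, 100, 100, rng)   # at t = T, r1 = 0: no movement remains
```

In the paper's setting, `best` would be re-selected each iteration as the candidate ELM parameterization with the lowest validation error.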

6.
Comput Intell Neurosci ; 2022: 3216400, 2022.
Article in English | MEDLINE | ID: mdl-36304739

ABSTRACT

The variety of sounds from natural and man-made sources in the deep sea has made the classification and identification of marine mammals, with the aim of recognizing different endangered species, a topic of interest for researchers and activists. In this paper, an experimental dataset was first created using a designed scenario. The whale optimization algorithm (WOA) is then used to train the multilayer perceptron neural network (MLP-NN). However, due to the large size of the data, the algorithm does not determine a clear boundary between the exploration and exploitation phases. To address this shortcoming, fuzzy inference is used as a new approach to developing and upgrading the WOA, called FWOA. By setting the FWOA's control parameters, the fuzzy inference can well define the boundary between the exploration and exploitation phases. To measure the performance of the designed classifier, in addition to using it to classify benchmark datasets, five benchmark algorithms (CVOA, WOA, ChOA, BWO, and PGO) were also used for MLP-NN training. The measured criteria are convergence speed, the ability to avoid local optima, and the classification rate. The simulation results on the obtained dataset showed that the classification rates of the MLP-FWOA, MLP-CVOA, MLP-WOA, MLP-ChOA, MLP-BWO, and MLP-PGO classifiers are 94.98, 92.80, 91.34, 90.24, 89.04, and 88.10, respectively. As a result, MLP-FWOA performed better than the other algorithms.
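The WOA update that the MLP-NN trainer relies on alternates between a shrinking encirclement of the best solution and a logarithmic spiral around it. A minimal sketch of the canonical update follows; the fuzzy control of the decay parameter `a`, which is FWOA's contribution, is deliberately left as the plain linear schedule, and all sizes are demo assumptions:

```python
import numpy as np

def woa_step(pop, best, a, rng):
    """One canonical WOA update: each agent either encircles the best
    solution (step size shrinks as `a` decays 2 -> 0) or follows a
    logarithmic spiral around it, chosen at random."""
    r = rng.uniform(size=pop.shape)
    A = 2 * a * r - a                            # |A| shrinks with `a`
    C = 2 * rng.uniform(size=pop.shape)
    ell = rng.uniform(-1, 1, size=pop.shape)     # spiral shape parameter
    encircle = best - A * np.abs(C * best - pop)
    spiral = np.abs(best - pop) * np.exp(ell) * np.cos(2 * np.pi * ell) + best
    return np.where(rng.uniform(size=pop.shape) < 0.5, encircle, spiral)

rng = np.random.default_rng(0)
pop = rng.uniform(-10, 10, size=(20, 2))         # candidate MLP weight vectors (toy)
best = np.zeros(2)                               # assume a known best for the demo
for t in range(60):
    a = 2 * (1 - t / 60)                         # FWOA would set this via fuzzy inference
    pop = woa_step(pop, best, a, rng)
```

In FWOA, the fuzzy system replaces the fixed linear decay of `a`, adapting the exploration/exploitation boundary to the search state.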


Subjects
Neural Networks, Computer; Whales; Animals; Algorithms; Computer Simulation
7.
Med Biol Eng Comput ; 60(10): 2931-2949, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35962266

ABSTRACT

The prevalence of the COVID-19 virus and its variants has influenced all aspects of our lives, and therefore the precise diagnosis of this disease is vital. If a polymerase chain reaction test for a subject is negative but he/she cannot easily breathe, taking a computed tomography (CT) image of his/her lungs is urgently recommended. This study aims to optimize a deep convolutional neural network (DCNN) structure to increase COVID-19 diagnosis accuracy in lung CT images. This paper employs the sine-cosine algorithm (SCA) to optimize the structure of a DCNN that takes raw CT images and determines their status. Three improvements over the regular SCA are proposed to enhance both the accuracy and the speed of the results. First, a new encoding approach is proposed based on the internet protocol (IP) address. Then, an enfeebled layer is proposed to generate a variable-length DCNN. The suggested model is examined on the COVID-CT and SARS-CoV-2 datasets. The proposed method is compared to a standard DCNN and seven variable-length models in terms of five known metrics (sensitivity, accuracy, specificity, F1-score, and precision) as well as receiver operating characteristic (ROC) and precision-recall curves. The results demonstrate that the proposed DCNN-IPSCA surpasses the other benchmarks, achieving final accuracies of 98.32% and 98.01%, sensitivities of 97.22% and 96.23%, and specificities of 96.77% and 96.44% on the SARS-CoV-2 and COVID-CT datasets, respectively. The proposed DCNN-IPSCA also performs much better than the standard DCNN, with GPU and CPU training times 387.69 and 63.10 times faster, respectively.
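One plausible reading of the IP-address-inspired encoding is a dotted genome whose octets decode into layers, with some octet values mapping to a disabled ("enfeebled") layer so that fixed-length genomes can yield DCNNs of different depths. The bit layout below is hypothetical, purely to illustrate the mechanism, not the paper's actual scheme:

```python
def decode_ip_genome(genome):
    """Illustrative decoding of an IP-address-style genome into a layer
    list: each dotted octet (0-255) selects a layer type from its high
    bits and one hyperparameter from its low bits."""
    layer_types = ["conv", "pool", "fc", "disabled"]   # "disabled" mimics an enfeebled layer
    layers = []
    for octet in genome.split("."):
        v = int(octet)                        # 0..255
        kind = layer_types[v // 64]           # high bits -> layer type
        param = v % 64                        # low bits  -> e.g. filter/unit count
        if kind != "disabled":                # enfeebled layers are skipped,
            layers.append((kind, param))      # yielding a variable-length DCNN
    return layers

net = decode_ip_genome("3.100.150.255")
# octets: 3 -> conv(3), 100 -> pool(36), 150 -> fc(22), 255 -> disabled (dropped)
```

The SCA would then search over such genomes, with classification performance of the decoded DCNN as the fitness.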


Subjects
COVID-19; Algorithms; COVID-19/diagnostic imaging; COVID-19 Testing; Female; Humans; Male; Neural Networks, Computer; SARS-CoV-2; Tomography, X-Ray Computed/methods
8.
Comput Intell Neurosci ; 2022: 5677961, 2022.
Article in English | MEDLINE | ID: mdl-35965746

ABSTRACT

Artificial intelligence (AI) techniques have been considered effective technologies for diagnosing and breaking the transmission chain of COVID-19 disease. Recent research uses the deep convolutional neural network (DCNN) as the discoverer or classifier of COVID-19 X-ray images. The most challenging part of neural networks is their training. Gradient Descent-Based (GDB) algorithms have long been used to train the fully connected layer (FCL) of a DCNN. Despite the ability of GDBs to run and converge quickly in some applications, their disadvantages are the manual adjustment of many parameters and an inherently sequential procedure, which makes them difficult to parallelize with graphics processing units (GPUs). Therefore, in this paper, the whale optimization algorithm (WOA) evolved by a fuzzy system, called FuzzyWOA, is proposed for DCNN training. With accurate and appropriate tuning of the WOA's control parameters, the fuzzy system defines the boundary between the exploration and exploitation phases in the search space, developing and upgrading the WOA. To evaluate the performance and capability of the proposed DCNN-FuzzyWOA model, a publicly available database called COVID-Xray-5k is used. DCNN-PSO, DCNN-GA, and LeNet-5 benchmark models are used for fair comparisons. Comparative parameters include accuracy, processing time, standard deviation (STD), ROC and precision-recall curves, and F1-score. The results showed that the FuzzyWOA training algorithm with 20 epochs was able to achieve 100% accuracy at a processing time of 880.44 s with an F1-score equal to 100%. Structurally, the i-6c-2s-12c-2s model achieved better results than the i-8c-2s-16c-2s model. However, the results of using FuzzyWOA for both models have been very encouraging compared to the particle swarm optimization, genetic algorithm, and LeNet-5 methods.
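The fuzzy system's role can be illustrated with a toy rule base that maps search progress to the WOA's exploration parameter `a`: an "early" rule keeps `a` large (exploration), a "late" rule drives it to zero (exploitation), and triangular memberships blend smoothly in between. The membership functions and rules here are hypothetical, not the paper's:

```python
def fuzzy_a(progress):
    """Toy Mamdani-style inference: map normalized search progress in
    [0, 1] to WOA's `a` via three triangular memberships and a weighted
    average (centroid-style) defuzzification."""
    early = max(0.0, 1.0 - 2.0 * progress)             # peaks at progress = 0
    mid = max(0.0, 1.0 - abs(2.0 * progress - 1.0))    # peaks at progress = 0.5
    late = max(0.0, 2.0 * progress - 1.0)              # peaks at progress = 1
    # rule consequents: early -> a = 2, mid -> a = 1, late -> a = 0
    total = early + mid + late
    return (2.0 * early + 1.0 * mid + 0.0 * late) / total

schedule = [fuzzy_a(t / 10) for t in range(11)]        # a decays smoothly from 2 to 0
```

A richer fuzzy system could also take population diversity or fitness stagnation as inputs, which is closer in spirit to what the abstract describes.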


Subjects
Artificial Intelligence; COVID-19; Algorithms; COVID-19/diagnostic imaging; Humans; Neural Networks, Computer; X-Rays
9.
Article in English | MEDLINE | ID: mdl-35291314

ABSTRACT

The chimp optimization algorithm (ChOA) is a robust nature-inspired technique that was recently proposed for addressing challenging real-world engineering problems. Due to the novelty of the ChOA, there is room for its improvement. The recognition and classification of marine mammals using artificial neural networks (ANNs) is a high-dimensional, challenging problem. To address this problem, this paper proposes using the ChOA as the ANN's trainer. However, evolving ANNs using metaheuristic algorithms suffers from high complexity and processing time. To address this shortcoming, this paper proposes fuzzy logic to adjust the ChOA's control parameters (Fuzzy-ChOA) for tuning the balance between the exploration and exploitation phases. In this regard, we collect underwater marine mammal sounds and then produce an experimental dataset. After pre-processing and feature extraction, an ANN is used as the classifier. In addition, for a fair comparison, we use a benchmark audio database of marine mammals. The comparison algorithms include the ChOA, coronavirus optimization algorithm, Harris hawks optimization, black widow optimization algorithm, and Kalman filter, and the comparative benchmarks include convergence speed, the ability to avoid local optima, classification rate, and receiver operating characteristics (ROC). The simulation results show that the proposed fuzzy model can tune the boundary between the exploration and exploitation phases. The convergence curve and ROC confirm that the convergence rate and performance of the designed recognizer are better than those of the benchmark algorithms.

10.
Wirel Pers Commun ; 124(2): 1355-1374, 2022.
Article in English | MEDLINE | ID: mdl-34873379

ABSTRACT

The early diagnosis and accurate separation of COVID-19 from non-COVID-19 cases based on pulmonary diffuse airspace opacities is one of the challenges facing researchers. Recently, researchers have tried to exploit the capability of Deep Learning (DL) methods to assist clinicians and radiologists in diagnosing positive COVID-19 cases from chest X-ray images. In this approach, DL models, especially Deep Convolutional Neural Networks (DCNNs), provide real-time, automated, and effective models to detect COVID-19 cases. However, conventional DCNNs usually use Gradient Descent-based approaches for training the fully connected layers. Although GD-based Training (GBT) methods are easy to implement and fast in process, they demand extensive manual parameter tuning to make them optimal. Besides, the GBT procedure is inherently sequential, which makes parallelizing it with Graphics Processing Units very difficult. Therefore, for the sake of having a real-time COVID-19 detector with parallel implementation capability, this paper proposes the use of the Whale Optimization Algorithm for training the fully connected layers. The designed detector is then benchmarked on a verified dataset called COVID-Xray-5k, and the results are verified by a comparative study with a classic DCNN, DUICM, and a Matched Subspace classifier with Adaptive Dictionaries. The results show that the proposed model, with an average accuracy of 99.06%, provides 1.87% better performance than the best comparison model. The paper also uses the concept of the Class Activation Map to detect the regions potentially infected by the virus. These were found to correlate with clinical results, as confirmed by experts. Although the results are auspicious, further investigation is needed on a larger dataset of COVID-19 images to provide a more comprehensive evaluation of accuracy rates.

11.
Biomed Signal Process Control ; 68: 102764, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33995562

ABSTRACT

Real-time detection of COVID-19 using radiological images has gained priority due to the increasing demand for fast diagnosis of COVID-19 cases. Conventional Deep Learning (DL) methods struggle to meet this demand, since training and fine-tuning a model's parameters consume much time. This paper introduces a novel two-phase approach for classifying chest X-ray images: in the first phase, a deep CNN is trained to work as a feature extractor, and in the second, Extreme Learning Machines (ELMs) are used for real-time detection. The main drawback of ELMs is that they need a large number of hidden-layer nodes to yield a reliable and accurate detector in image-processing applications, since detection performance depends strongly on the setting of the initial weights and biases. Therefore, this paper uses the Chimp Optimization Algorithm (ChOA) to improve the results and increase the reliability of the network while maintaining real-time capability. The designed detector is benchmarked on the COVID-Xray-5k and COVIDetectioNet datasets, and the results are verified by comparison with a classic DCNN, a Genetic Algorithm optimized ELM (GA-ELM), a Cuckoo Search optimized ELM (CS-ELM), and a Whale Optimization Algorithm optimized ELM (WOA-ELM). The proposed approach outperforms the other comparative benchmarks with ultimate accuracies of 98.25% and 99.11% on the COVID-Xray-5k and COVIDetectioNet datasets, respectively, reducing the relative error by 1.75% and 1.01% compared with the classic DCNN. More importantly, the time needed for training the deep ChOA-ELM is only 0.9474 ms, and the overall testing time for 3100 images is 2.937 s.
